Research Article | Open Access
Volume 2025 | Article ID 100064 | https://doi.org/10.1016/j.plaphe.2025.100064

An unsupervised semantic segmentation network for wood–leaf separation in 3D point clouds

Yijun Zhong,1 Jiaohua Qin,1 Shuai Liu ,1 Zhenyan Ma,1,2 Exian Liu,1 and Hui Fan1

1Central South University of Forestry and Technology, Changsha, 410004, China
2Hirosaki University, Hirosaki, 036-8561, Japan

Received: 18 Jan 2025
Accepted: 27 May 2025
Published: 06 Jun 2025

Abstract

Separating wood and leaf components in tree point clouds is a key task for automated forest inventory and management. To obtain accurate wood–leaf separation results, traditional methods typically rely on large amounts of annotated point cloud data to train supervised semantic segmentation networks. However, point-wise annotation is extremely labor-intensive, time-consuming, and costly, which greatly limits the adoption of supervised learning methods in wood–leaf separation tasks. To eliminate the dependence on annotated point clouds, this study explores the feasibility of wood–leaf separation under completely unsupervised conditions. To this end, we propose an unsupervised semantic segmentation network that directly extracts wood and leaf components from 3D point clouds. The network adopts a sparse convolutional neural network as the backbone and incorporates two custom-designed modules, the dual point attention (DPA) module and the point cloud feature convolutional integrator (PFCI) module, for enhanced feature fusion and extraction. Semantic classification is then achieved by generating pseudolabels via superpoint clustering. On large-scale public datasets covering coniferous and broadleaf forests, together with our self-constructed dataset, the proposed network achieved an overall accuracy (oAcc) of 67.583%, a mean classification accuracy (mAcc) of 50.249%, and a mean intersection over union (mIoU) of 38.512% at the forest level; for wood–leaf separation at the tree level, it attained an oAcc of 80.856%, an mAcc of 64.013%, and an mIoU of 49.695%. Across both the forest and tree scenarios, our network outperforms the current state-of-the-art methods GrowSP and PointDC.
Ablation experiments further confirm that each of the proposed modules contributes significantly to segmentation accuracy. In addition, the network demonstrates strong robustness even under high occlusion rates and exhibits excellent generalization capability.
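To make the pseudolabeling step above concrete, the sketch below illustrates one common form of superpoint clustering: pool per-point features within each superpoint, cluster the pooled features into the target number of semantic classes, and broadcast the cluster ids back to points as pseudolabels. This is a minimal illustration only, not the authors' implementation; the function name, the use of k-means, and mean pooling are assumptions for exposition.

```python
import numpy as np
from sklearn.cluster import KMeans

def superpoint_pseudolabels(point_feats, superpoint_ids, n_classes=2, seed=0):
    """Assign a pseudolabel to every point by clustering superpoints.

    point_feats   : (N, D) array of per-point features.
    superpoint_ids: (N,) array mapping each point to its superpoint.
    n_classes     : number of pseudolabel classes (2 for wood vs. leaf).
    """
    sp_ids = np.unique(superpoint_ids)
    # Mean-pool point features within each superpoint.
    sp_feats = np.stack(
        [point_feats[superpoint_ids == s].mean(axis=0) for s in sp_ids]
    )
    # Cluster the pooled superpoint features into n_classes groups.
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit(sp_feats)
    # Broadcast each superpoint's cluster id back to its member points.
    sp_label = dict(zip(sp_ids, km.labels_))
    return np.array([sp_label[s] for s in superpoint_ids])
```

In practice the per-point features would come from the backbone network rather than raw coordinates, and the pseudolabels would then supervise the next round of training.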

© 2019-2023 Plant Phenomics. All rights reserved. ISSN 2643-6515.
